Escaping from an attractor: Importance sampling and rest points I

Authors
Abstract


Similar articles

Improving Sparse Associative Memories by Escaping from Bogus Fixed Points

The Gripon-Berrou neural network (GBNN) is a recently invented recurrent neural network embracing an LDPC-like sparse encoding setup that makes it extremely resilient to noise and errors. A natural use of GBNN is as an associative memory. There are two activation rules for the neuron dynamics, namely SUM-OF-SUM and SUM-OF-MAX. The latter outperforms the former in terms of retrieval rate by a hug...
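The difference between the two rules can be made concrete with a minimal sketch (assumed shapes and names, not the GBNN authors' code): a binary connection tensor `W[c, i, d, j]` over C clusters of L neurons and an activity matrix `active[d, j]`. SUM-OF-SUM lets every active connected neuron contribute, while SUM-OF-MAX caps each cluster's contribution at one.

```python
import numpy as np

def sum_of_sum(W, active):
    # Every active connected neuron contributes one unit, so several active
    # neurons inside the same cluster can all add to a target neuron's score.
    return np.einsum('cidj,dj->ci', W, active)

def sum_of_max(W, active):
    # Each contributing cluster adds at most one unit: take the max over the
    # active neurons within every cluster, then sum the per-cluster maxima.
    contrib = np.max(W * active[None, None, :, :], axis=3)  # shape (C, L, C)
    return contrib.sum(axis=2)

# Toy usage: 3 clusters of 4 neurons with random binary connections/activity.
rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(3, 4, 3, 4))
active = rng.integers(0, 2, size=(3, 4))
print(sum_of_sum(W, active))
print(sum_of_max(W, active))
```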


Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition

We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify a strict saddle property for non-convex problems that allows for efficient optimization. Using this property we show that from an arbit...
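The escape mechanism alluded to above can be illustrated with a toy sketch (my own illustration of the general idea, not the paper's algorithm): on a strict-saddle function, exact gradient descent started at the saddle never moves, while gradient steps with added noise drift off it.

```python
import numpy as np

# Strict-saddle toy function f(x, y) = x**2 - y**2; the origin is a saddle.
def grad_f(x):
    return np.array([2.0 * x[0], -2.0 * x[1]])

def noisy_gd(x0, steps=500, lr=0.01, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x) + noise * rng.standard_normal(x.shape)
        x -= lr * g
    return x

print(noisy_gd([0.0, 0.0]))               # y-coordinate has grown: saddle escaped
print(noisy_gd([0.0, 0.0], noise=0.0))    # stays exactly at [0., 0.]
```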


Slow Escaping Points of Meromorphic Functions

We show that for any transcendental meromorphic function f there is a point z in the Julia set of f such that the iterates f^n(z) escape, that is, tend to ∞, arbitrarily slowly. The proof uses new covering results for analytic functions. We also introduce several slow escaping sets, in each of which f^n(z) tends to ∞ at a bounded rate, and establish the connections between these sets and the Juli...
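A small numerical illustration (my own toy, not from the paper): iterating the transcendental entire function f(z) = exp(z) from a real starting point makes |f^n(z)| blow up super-exponentially, whereas the cited result guarantees Julia-set points that instead escape to infinity arbitrarily slowly.

```python
import cmath

def iterate_exp(z, n):
    # Record the orbit z, f(z), f^2(z), ... under f(z) = exp(z).
    orbit = [z]
    for _ in range(n):
        z = cmath.exp(z)
        orbit.append(z)
    return orbit

for w in iterate_exp(1.0 + 0.0j, 3):
    print(abs(w))   # 1, e, e**e, e**(e**e) ~ 3.8e6
```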


A Generic Approach for Escaping Saddle points

A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use second-order methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them ...
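One common first-order trick in this literature, shown below as a hedged sketch rather than this paper's specific method, is to probe curvature with Hessian-vector products approximated by finite differences of gradients, so negative curvature at a suspected saddle can be detected without ever forming a Hessian.

```python
import numpy as np

def grad(x):
    # Toy objective f(x, y) = x**2 - y**2 with a saddle at the origin.
    return np.array([2.0 * x[0], -2.0 * x[1]])

def hess_vec(x, v, eps=1e-5):
    # H v approximated by a central finite difference of gradients.
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

x = np.zeros(2)
v = np.array([0.0, 1.0])            # candidate escape direction
curvature = v @ hess_vec(x, v)      # v^T H v; negative => saddle direction
print(curvature)                    # ~ -2.0, so stepping along ±v escapes
```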


An Adaptive Importance Sampling Technique

This paper proposes a new adaptive importance sampling (AIS) technique for approximate evaluation of multidimensional integrals. Whereas known AIS algorithms try to find a sampling density that is approximately proportional to the integrand, our algorithm aims directly at the minimization of the variance of the sample average estimate. Our algorithm uses piecewise constant sampling densities, w...
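The variance-minimization idea can be sketched as follows (a toy version of the general approach with an assumed integrand and bin layout, not the paper's algorithm): on [0, 1], the bin probabilities of a piecewise-constant proposal are adapted toward the variance-minimizing choice p_k ∝ width_k · sqrt(E[g(X)^2 | bin k]).

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda x: np.exp(-10.0 * (x - 0.3) ** 2)    # hypothetical integrand
K, N = 20, 5000
edges = np.linspace(0.0, 1.0, K + 1)
widths = np.diff(edges)
p = np.full(K, 1.0 / K)                          # start from a uniform proposal

for _ in range(5):                               # a few adaptation rounds
    bins = rng.choice(K, size=N, p=p)
    x = edges[bins] + widths[bins] * rng.random(N)
    q = p[bins] / widths[bins]                   # piecewise-constant density
    w = g(x) / q                                 # importance weights
    estimate = w.mean()                          # sample-average estimate
    # Per-bin second moment of g drives the next proposal.
    m2 = np.array([np.mean(g(x[bins == k]) ** 2) if np.any(bins == k) else 1e-12
                   for k in range(K)])
    p = widths * np.sqrt(m2)
    p /= p.sum()

print(estimate)   # close to the true integral, roughly 0.51
```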



Journal

Journal title: The Annals of Applied Probability

Year: 2015

ISSN: 1050-5164

DOI: 10.1214/14-aap1064